LW - Which LessWrong/Alignment topics would you like to be tutored in? [Poll] by Ruby
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Which LessWrong/Alignment topics would you like to be tutored in? [Poll], published by Ruby on September 19, 2024 on LessWrong.

Would you like to be tutored in applied game theory, natural latents, CFAR-style rationality techniques, "general AI x-risk", Agent Foundations, anthropics, or some other topics discussed on LessWrong?

I'm thinking about prototyping some topic-specific LLM tutor bots, and would like to prioritize topics that multiple people are interested in.

Topic-specific LLM tutors would be customized with things like pre-loaded relevant context, helpful system prompts, and more focused testing to ensure they work.
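
To make that concrete, here is a minimal sketch, in Python, of how a topic-specific tutor might bundle a system prompt and pre-loaded context. This is my own illustration rather than anything from the post: TutorConfig, build_messages, and the example topic are assumed names, and the actual model call is left to whichever chat-completion API you use.

# Minimal sketch of the customization described above, assuming a generic
# chat-style messages format. TutorConfig, build_messages, and the example
# topic are illustrative; the model call itself is left out.

from dataclasses import dataclass, field

@dataclass
class TutorConfig:
    topic: str
    system_prompt: str
    preloaded_context: list[str] = field(default_factory=list)  # e.g. key post excerpts

def build_messages(config: TutorConfig, question: str) -> list[dict]:
    """Assemble chat messages: system prompt, pre-loaded topic context,
    then the learner's question."""
    context_block = "\n\n".join(config.preloaded_context)
    return [
        {"role": "system", "content": config.system_prompt},
        {"role": "user", "content": f"Background material on {config.topic}:\n\n{context_block}"},
        {"role": "user", "content": question},
    ]

natural_latents_tutor = TutorConfig(
    topic="natural latents",
    system_prompt=(
        "You are a patient tutor for the LessWrong 'natural latents' posts. "
        "Check the learner's understanding with short questions before moving on."
    ),
    preloaded_context=["<relevant post excerpts would be pasted here>"],
)

messages = build_messages(natural_latents_tutor, "What problem do natural latents solve?")
# `messages` would then be passed to whichever chat-completion endpoint you prefer.

Under this kind of setup, the "more focused testing" part would presumably amount to running a fixed set of questions against each configuration and checking the answers by hand.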

Note: I'm interested in topics that are written about on LessWrong, e.g. infra-bayesianism, and not magnetohydrodynamics.

I'm going to use the same poll infrastructure that Ben Pace pioneered recently. There is a thread below where you add and vote on topics/domains/areas where you might like tutoring.

1. Karma: upvote/downvote to express enthusiasm about there being tutoring for a topic.

2. Reacts: click on the agree react to indicate you personally would like tutoring on a topic.

3. New Poll Option: add a new topic for people to express interest in being tutored on.

For the sake of this poll, I'm more interested in whether you'd like tutoring on a topic or not, separate from the question of whether you think a tutoring bot would be any good. I'll worry about that part.

Background

I've been playing around with LLMs a lot in the past couple of months, and so far my favorite use case is tutoring. LLM assistance is helpful via multiple routes, such as providing background context with less effort than external search or reading, keeping me engaged via interactivity, generating examples, and breaking down complex sections into more digestible pieces.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org